
    Face recognition by using discriminative common vectors

    In face recognition tasks, the dimension of the sample space is typically larger than the number of samples in the training set.

    Face Recognition Based on Videos by Using Convex Hulls

    A wide range of face appearance variations can be modeled effectively by set-based recognition approaches, but the computational complexity of current methods depends heavily on the set and class sizes. This paper introduces new video-based classification methods designed to reduce the disk space required for data samples and to speed up the testing process in large-scale face recognition systems. In the proposed method, image sets collected from videos are approximated with kernelized convex hulls, and we show that in this setting it is sufficient to use only the samples that shape the image set boundaries. The kernelized Support Vector Data Description (SVDD) is used to extract those important samples that form the image set boundaries. Moreover, we show that these kernelized hypersphere models can themselves be used to approximate image sets for classification purposes. We then propose a binary hierarchical decision tree approach to speed up the classification system even further. Lastly, since the most popular video data sets used for set-based recognition methods include either few people or a small number of videos per person, we introduce a new video database that includes 285 people with 8 videos of each person. The experimental results on databases of varying sizes show that the proposed methods greatly improve the testing times of the classification system (we obtained speed-ups of up to a factor of 20) without a significant drop in accuracy.
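    A minimal sketch of the boundary-sample selection step described above, assuming scikit-learn's OneClassSVM as the SVDD solver (the two formulations coincide for an RBF kernel); the function name and the nu and gamma values are illustrative, not taken from the paper.

```python
# Sketch: select only the samples that define an image set's boundary,
# using a kernelized one-class model. scikit-learn's OneClassSVM is
# equivalent to SVDD for an RBF kernel; nu and gamma are illustrative.
import numpy as np
from sklearn.svm import OneClassSVM

def boundary_samples(image_set, nu=0.1, gamma=0.5):
    """image_set: (n_samples, n_features) array of vectorized frames."""
    model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma)
    model.fit(image_set)
    # The support vectors define the kernelized hypersphere boundary;
    # interior samples can be discarded to save disk space.
    return image_set[model.support_]

# Usage: compress a 500-frame set to its boundary frames.
frames = np.random.rand(500, 1024)   # placeholder data, not a real video
reduced = boundary_samples(frames)
print(reduced.shape)                 # typically far fewer rows remain
```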

    A Supervised Clustering Algorithm for the Initialization of RBF Neural Network Classifiers

    In this paper, we propose a new supervised clustering algorithm, coined Homogeneous Clustering (HC), to find the number and initial locations of the hidden units in Radial Basis Function (RBF) neural network classifiers. In contrast to the traditional clustering algorithms introduced for this goal, the proposed algorithm is a supervised procedure in which the number and initial locations of the hidden units are determined by splitting clusters that overlap between classes. The basic idea of the proposed approach is to create class-specific homogeneous clusters whose samples are closer to their own cluster mean than to the means of rival clusters belonging to other classes. We tested the proposed clustering algorithm, together with the RBF network classifier, on the Graz02 object database and the ORL face database. The experimental results show that the RBF network classifier performs better when it is initialized with the proposed HC algorithm than with an unsupervised k-means algorithm. Moreover, our recognition results exceed the best published results on the Graz02 database and are comparable to the best results on the ORL face database, indicating that the proposed clustering algorithm initializes the hidden unit parameters successfully.
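    The abstract does not spell out the splitting rule, so the following is a speculative sketch of the homogeneity test it describes: each class keeps splitting its clusters (via k-means, assumed here) while any of its samples lies closer to a rival class's cluster centre than to one of its own. Names, the splitting mechanism, and the stopping rule are all assumptions.

```python
# Speculative sketch of a supervised cluster-splitting loop in the
# spirit of HC. Assumes at least two classes; not the paper's algorithm.
import numpy as np
from sklearn.cluster import KMeans

def homogeneous_clusters(X, y, max_splits=10):
    # Start with one cluster (the class mean) per class label.
    centres = {c: [X[y == c].mean(axis=0)] for c in np.unique(y)}
    for _ in range(max_splits):
        split_done = False
        for c in centres:
            Xc = X[y == c]
            own = np.vstack(centres[c])
            rivals = np.vstack([m for o, ms in centres.items()
                                for m in ms if o != c])
            d_own = np.linalg.norm(Xc[:, None] - own, axis=2).min(axis=1)
            d_rival = np.linalg.norm(Xc[:, None] - rivals, axis=2).min(axis=1)
            # A cluster set is inhomogeneous if a rival centre captures
            # any of this class's samples; split into one more cluster.
            if (d_rival < d_own).any() and len(Xc) > len(centres[c]):
                km = KMeans(n_clusters=len(centres[c]) + 1, n_init=10).fit(Xc)
                centres[c] = list(km.cluster_centers_)
                split_done = True
        if not split_done:
            break
    return centres   # class label -> list of candidate RBF centres
```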

    Semi-supervised dimensionality reduction using pairwise equivalence constraints

    To deal with the problem of insufficient labeled data, side information, given in the form of pairwise equivalence constraints between points, is often used to discover groups within data. However, existing methods that use side information typically fail in high-dimensional spaces. In this paper, we address the problem of learning from side information for high-dimensional data. To this end, we propose a semi-supervised dimensionality reduction scheme that incorporates pairwise equivalence constraints to find a better embedding space, which improves the performance of subsequent clustering and classification phases. Our method builds on the assumption that points in a sufficiently small neighborhood tend to have the same label. Equivalence constraints are employed to modify the neighborhoods and to increase the separability of different classes. Experimental results on high-dimensional image data sets show that integrating side information into the dimensionality reduction improves clustering and classification performance.
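    As a rough illustration of the idea of modifying neighborhoods with equivalence constraints, the sketch below edits a k-nearest-neighbor affinity graph before computing a spectral embedding; the binary weighting scheme is an assumption for illustration, not the paper's algorithm.

```python
# Sketch: fold pairwise equivalence constraints into a neighborhood
# graph, then embed. must_link / cannot_link are lists of index pairs.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.manifold import spectral_embedding

def constrained_embedding(X, must_link, cannot_link, k=10, dim=2):
    # Symmetric k-NN affinity: nearby points are assumed to share a label.
    W = kneighbors_graph(X, k, mode="connectivity").toarray()
    W = np.maximum(W, W.T)
    for i, j in must_link:       # pull equivalent points together
        W[i, j] = W[j, i] = 1.0
    for i, j in cannot_link:     # push inequivalent points apart
        W[i, j] = W[j, i] = 0.0
    return spectral_embedding(W, n_components=dim)
```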

    Improving Sparse Representation-Based Classification Using Local Principal Component Analysis

    Sparse representation-based classification (SRC), proposed by Wright et al., seeks the sparsest decomposition of a test sample over the dictionary of training samples, and assigns the test sample to the most-contributing class. Because it assumes test samples can be written as linear combinations of their same-class training samples, the success of SRC depends on the size and representativeness of the training set. Our proposed classification algorithm enlarges the training set by using local principal component analysis to approximate the basis vectors of the tangent hyperplane of the class manifold at each training sample. The dictionary in SRC is replaced by a local dictionary that adapts to the test sample and includes the training samples and their corresponding tangent basis vectors. We use a synthetic data set and three face databases to demonstrate that this method can achieve higher classification accuracy than SRC in cases of sparse sampling, nonlinear class manifolds, and stringent dimension reduction.

    Comment: Published in "Computational Intelligence for Pattern Recognition," editors Shyi-Ming Chen and Witold Pedrycz. The original publication is available at http://www.springerlink.co
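    A hedged sketch of the tangent-augmented dictionary idea: each training sample contributes the top principal directions of its same-class neighborhood (local PCA), and classification uses the usual SRC minimum-residual rule with an l1 solver. The neighborhood size, number of directions, and the Lasso-based solver are illustrative choices, and the test-adaptive selection of a local dictionary is omitted for brevity.

```python
# Sketch: SRC with a dictionary enlarged by local tangent basis vectors.
import numpy as np
from sklearn.linear_model import Lasso

def tangent_basis(X_class, i, k=5, n_dirs=2):
    """Top principal directions of sample i's k-neighborhood (local PCA)."""
    d = np.linalg.norm(X_class - X_class[i], axis=1)
    nbrs = X_class[np.argsort(d)[:k + 1]]          # includes sample i itself
    centered = nbrs - nbrs.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[:n_dirs]                             # (n_dirs, n_features)

def src_classify(x, X_train, y_train, alpha=0.01):
    # Augmented dictionary: training samples plus their tangent vectors.
    atoms, labels = [], []
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        for i in range(len(Xc)):
            atoms.append(Xc[i]); labels.append(c)
            for v in tangent_basis(Xc, i):
                atoms.append(v); labels.append(c)
    D = np.array(atoms).T                          # columns are atoms
    labels = np.array(labels)
    # l1-regularized decomposition of the test sample over the dictionary.
    coef = Lasso(alpha=alpha, fit_intercept=False).fit(D, x).coef_
    # Assign to the class with the smallest reconstruction residual.
    residuals = {c: np.linalg.norm(x - D[:, labels == c] @ coef[labels == c])
                 for c in np.unique(y_train)}
    return min(residuals, key=residuals.get)
```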

    The Ninth Visual Object Tracking VOT2021 Challenge Results


    Hyperdisk Based Large Margin Classifiers

    We introduce a large margin linear binary classification framework that approximates each class with a hyperdisk, the intersection of the affine hull and the bounding hypersphere of its training samples in feature space, and then finds the linear classifier that maximizes the margin separating the two hyperdisks. We contrast this with Support Vector Machines (SVMs), which find the maximum-margin separator of the pointwise convex hulls of the training samples, arguing that replacing convex hulls with looser convex class models such as hyperdisks provides safer margin estimates that improve the accuracy on some problems. Both the hyperdisks and their separators are found by solving simple quadratic programs. The method is extended to nonlinear feature spaces using the kernel trick, and multi-class problems are handled by combining binary classifiers in the same ways as for SVMs. Experiments on a range of data sets show that the method compares favourably with other popular large margin classifiers.
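    The geometry can be illustrated with a small quadratic program: each hyperdisk is a point of the class's affine hull constrained to lie inside a bounding hypersphere, and the separator bisects the segment joining the two closest hyperdisk points. The sketch below uses cvxpy and a simplified bounding sphere (class mean, maximum radius), which is an assumption rather than the paper's exact construction; it also assumes the two hyperdisks do not intersect.

```python
# Sketch: max-margin separator of two hyperdisks via the closest-points QP.
import numpy as np
import cvxpy as cp

def hyperdisk_classifier(X1, X2):
    # Simplified bounding spheres: class mean and maximum radius.
    c = [X.mean(axis=0) for X in (X1, X2)]
    r = [np.linalg.norm(X - ci, axis=1).max() for X, ci in zip((X1, X2), c)]
    a1, a2 = cp.Variable(len(X1)), cp.Variable(len(X2))
    p1, p2 = X1.T @ a1, X2.T @ a2            # points in each affine hull
    cons = [cp.sum(a1) == 1, cp.sum(a2) == 1,
            cp.norm(p1 - c[0]) <= r[0],      # stay inside bounding sphere
            cp.norm(p2 - c[1]) <= r[1]]
    cp.Problem(cp.Minimize(cp.sum_squares(p1 - p2)), cons).solve()
    # The separating hyperplane bisects the closest-point segment;
    # degenerate (w = 0) if the hyperdisks overlap.
    w = p1.value - p2.value
    b = -w @ (p1.value + p2.value) / 2.0
    return w, b                               # sign(w @ x + b) gives the class
```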